Model comparison

To demonstrate the use of model comparison criteria in PyMC3, we implement the 8 schools example from Section 5.5 of Gelman et al. (2003), which attempts to infer the effects of coaching on SAT scores of students from 8 schools. Below, we fit a pooled model, which assumes a single fixed effect across all schools, and a hierarchical model that allows for a random effect that partially pools the data.


In [1]:
%matplotlib inline
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')

The data include the observed treatment effects and associated standard deviations in the 8 schools.


In [2]:
J = 8
y = np.array([28,  8, -3,  7, -1,  1, 18, 12])
sigma = np.array([15, 10, 16, 11,  9, 11, 10, 18])

Pooled model


In [3]:
with pm.Model() as pooled:
    mu = pm.Normal('mu', 0, sd=1e6)
    
    obs = pm.Normal('obs', mu, sd=sigma, observed=y)
    
    trace_p = pm.sample(2000)


Average ELBO = -42.09: 100%|██████████| 200000/200000 [00:17<00:00, 11675.35it/s]
100%|██████████| 2000/2000 [00:01<00:00, 1747.24it/s]

In [4]:
pm.traceplot(trace_p);


Hierarchical model


In [5]:
with pm.Model() as hierarchical:
    
    eta = pm.Normal('eta', 0, 1, shape=J)
    mu = pm.Normal('mu', 0, sd=1e6)
    tau = pm.HalfCauchy('tau', 5)
    
    theta = pm.Deterministic('theta', mu + tau*eta)
    
    obs = pm.Normal('obs', theta, sd=sigma, observed=y)
    
    trace_h = pm.sample(2000)


Average ELBO = -43.175: 100%|██████████| 200000/200000 [00:23<00:00, 8435.73it/s]
100%|██████████| 2000/2000 [00:03<00:00, 477.25it/s]

In [6]:
pm.traceplot(trace_h, varnames=['mu']);



In [7]:
pm.forestplot(trace_h, varnames=['theta']);


Deviance Information Criterion (DIC)

DIC (Spiegelhalter et al. 2002) is an information theoretic criterion for estimating predictive accuracy that is analogous to Akaike's Information Criterion (AIC). It is a more Bayesian approach that allows for the modeling of random effects, replacing the maximum likelihood estimate with the posterior mean and using the effective number of parameters to correct for bias.
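For concreteness, here is a minimal by-hand sketch of the DIC calculation for the pooled model, using the posterior samples of mu from trace_p and the Normal likelihood defined above. This is only an illustration of the formula; pm.dic, used below, is the supported implementation.

from scipy import stats

def deviance(mu_val):
    # D(mu) = -2 * log p(y | mu, sigma) for the pooled Normal likelihood
    return -2 * stats.norm.logpdf(y, loc=mu_val, scale=sigma).sum()

mean_dev = np.mean([deviance(m) for m in trace_p['mu']])  # posterior mean of the deviance
dev_at_mean = deviance(trace_p['mu'].mean())              # deviance at the posterior mean of mu
p_dic = mean_dev - dev_at_mean                            # effective number of parameters
dic_by_hand = dev_at_mean + 2 * p_dic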


In [8]:
pooled_dic = pm.dic(trace_p, pooled)

pooled_dic


Out[8]:
90.832077760613302

In [9]:
hierarchical_dic = pm.dic(trace_h, hierarchical)
    
hierarchical_dic


Out[9]:
124.82406598767808

Widely-applicable Information Criterion (WAIC)

WAIC (Watanabe 2010) is a fully Bayesian criterion for estimating out-of-sample expectation, using the computed log pointwise posterior predictive density (LPPD) and correcting for the effective number of parameters to adjust for overfitting.
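To make the WAIC formula concrete, here is a rough by-hand version for the pooled model, again using the posterior samples of mu from trace_p. It is only a sketch (it assumes scipy.special.logsumexp is available); pm.waic, used below, is the supported implementation.

from scipy import stats
from scipy.special import logsumexp

# S x 8 matrix of pointwise log-likelihoods, one row per posterior sample of mu
log_lik = np.array([stats.norm.logpdf(y, loc=m, scale=sigma) for m in trace_p['mu']])
S = log_lik.shape[0]

lppd = (logsumexp(log_lik, axis=0) - np.log(S)).sum()  # log pointwise predictive density
p_waic = log_lik.var(axis=0, ddof=1).sum()             # effective number of parameters
waic_by_hand = -2 * (lppd - p_waic)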


In [10]:
pooled_waic = pm.waic(trace_p, pooled)
    
pooled_waic.WAIC


Out[10]:
61.121151094964134

In [11]:
hierarchical_waic = pm.waic(trace_h, hierarchical)
    
hierarchical_waic.WAIC


Out[11]:
61.570270537072844

PyMC3 includes two convenience functions to help compare WAIC across different models. The first is compare, which computes WAIC (or LOO) from a set of traces and models and returns a DataFrame.


In [12]:
df_comp_WAIC = pm.compare((trace_h, trace_p), (hierarchical, pooled))
df_comp_WAIC


Out[12]:
       WAIC     pWAIC     dWAIC    weight       SE        dSE  warning
1   61.1212  0.674471         0  0.555905  2.20084          0        0
0   61.5703   1.08163  0.449119  0.444095  1.96714  0.0259316        0

There are several columns, so let's go through their meaning one by one:

  1. The first column clearly contains the values of WAIC. The DataFrame is always sorted from lowest to highest WAIC. The index reflects the order in which the models are passed to this function.

  2. The second column is the estimated effective number of parameters. In general, models with more parameters are more flexible in fitting the data, but this flexibility can also lead to overfitting. Thus we can interpret pWAIC as a penalization term; intuitively, it is also a measure of how flexible each model is in fitting the data.

  3. The third column is the difference between the value of WAIC for each model and that of the top-ranked model. For this reason we always get a value of 0 for the first model.

  4. Sometimes when comparing models we do not want to select the "best" model, but instead to make predictions by averaging over all the models (or at least several of them). Ideally we would like a weighted average, giving more weight to the models that seem to explain/predict the data better. One of the many ways to do this is to use Akaike weights based on the WAIC values of each model (a small sketch of this calculation is shown after this list). These weights can be loosely interpreted as the probability of each model (among the compared models) given the data. One caveat of this approach is that the weights are based on point estimates of WAIC, i.e. the uncertainty is ignored.

  5. The fifth column records the standard error of the WAIC computation. The standard error can be useful to assess the uncertainty of the WAIC estimates. Nevertheless, caution is needed because the estimation of the standard error assumes normality, and hence it could be problematic when the sample size is low.

  6. In the same way that we can compute the standard error for each value of WAIC, we can compute the standard error of the difference between two values of WAIC. Notice that these two quantities are not necessarily the same, because the uncertainty about WAIC is correlated between models. This quantity is always 0 for the top-ranked model.

  7. Finally we have the last column, "warning". A value of 1 indicates that the computation of WAIC may not be reliable; this warning is based on an empirically determined cutoff value and needs to be interpreted with caution. For more details see Vehtari et al. (2016).
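As mentioned in point 4, the Akaike weights can be computed directly from the dWAIC column. The following sketch (assuming the column names shown in the table above) matches the weight column here up to rounding:

rel_likelihood = np.exp(-0.5 * df_comp_WAIC['dWAIC'])   # relative likelihood of each model
weights_by_hand = rel_likelihood / rel_likelihood.sum() # normalize to get Akaike weights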

The second convenience function takes the output of compare and produces a summary plot in the style of the one used in the book Statistical Rethinking by Richard McElreath (see also the PyMC3 port of the examples in that book).


In [13]:
pm.compare_plot(df_comp_WAIC);


The empty circles represent the values of WAIC and the black error bars associated with them are the standard errors of WAIC.

The value of the lowest WAIC is also indicated with a vertical dashed grey line to ease comparison with other WAIC values.

The filled black dots are the in-sample deviance of each model, which for WAIC is the corresponding WAIC value minus 2 pWAIC (see the quick check after this description).

For all models except the top-ranked one we also get a triangle indicating the difference in WAIC between that model and the top model, and a grey error bar indicating the standard error of the difference between the top-ranked WAIC and the WAIC for each model.
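As a quick check of the relationship just described (assuming the column names shown in the compare table above):

df_comp_WAIC['WAIC'] - 2 * df_comp_WAIC['pWAIC']   # in-sample deviance of each model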

Leave-one-out Cross-validation (LOO)

LOO cross-validation is an estimate of the out-of-sample predictive fit. In cross-validation, the data are repeatedly partitioned into training and holdout sets, iteratively fitting the model with the former and evaluating the fit with the holdout data. Vehtari et al. (2016) introduced an efficient way to compute LOO from MCMC samples, using Pareto-smoothed importance sampling (PSIS) to stabilize the importance weights and provide an estimate of point-wise out-of-sample prediction accuracy.
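The importance-sampling idea behind LOO can be illustrated with a crude by-hand version for the pooled model that skips the Pareto smoothing step; pm.loo, used below, applies PSIS and is the supported implementation.

from scipy import stats
from scipy.special import logsumexp

# S x 8 matrix of pointwise log-likelihoods, one row per posterior sample of mu
log_lik = np.array([stats.norm.logpdf(y, loc=m, scale=sigma) for m in trace_p['mu']])
S = log_lik.shape[0]

# elpd_loo_i ~ -log( (1/S) * sum_s 1 / p(y_i | theta_s) ), i.e. plain importance sampling
elpd_loo_i = -(logsumexp(-log_lik, axis=0) - np.log(S))
loo_by_hand = -2 * elpd_loo_i.sum()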


In [14]:
pooled_loo = pm.loo(trace_p, pooled)
    
pooled_loo.LOO


/home/osvaldo/anaconda3/lib/python3.5/site-packages/pymc3/stats.py:208: UserWarning: Estimated shape parameter of Pareto distribution is
        greater than 0.7 for one or more samples.
        You should consider using a more robust model, this is
        because importance sampling is less likely to work well if the marginal
        posterior and LOO posterior are very different. This is more likely to
        happen with a non-robust model and highly influential observations.
Out[14]:
61.591020562529295

In [15]:
hierarchical_loo  = pm.loo(trace_h, hierarchical)
    
hierarchical_loo.LOO


/home/osvaldo/anaconda3/lib/python3.5/site-packages/pymc3/stats.py:208: UserWarning: Estimated shape parameter of Pareto distribution is
        greater than 0.7 for one or more samples.
        You should consider using a more robust model, this is
        because importance sampling is less likely to work well if the marginal
        posterior and LOO posterior are very different. This is more likely to
        happen with a non-robust model and highly influential observations.
Out[15]:
61.655191059793324

We can also use compare with LOO.


In [16]:
df_comp_LOO = pm.compare((trace_h, trace_p), (hierarchical, pooled), ic='LOO')
df_comp_LOO


Out[16]:
        LOO      pLOO       dLOO    weight       SE        dSE  warning
1    61.591  0.909406          0  0.508021  2.13514          0        1
0   61.6552   1.12409  0.0641705  0.491979   1.9971  0.0258501        1

The columns report the equivalent values for LOO; notice that in this example we get two warnings. Also notice that the ranking of the models is the same as the one obtained with WAIC.

We can also plot the results.


In [17]:
pm.compare_plot(df_comp_LOO);


Interpretation

Though we might expect the hierarchical model to outperform the complete pooling model, there is little to choose between the models in this case, given that both models give very similar values of the information criteria. This is even clearer once we take into account the uncertainty (in terms of standard errors) of the WAIC and LOO estimates.

References

Gelman, A., Hwang, J., & Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models. Statistics and Computing, 24(6), 997–1016.

Vehtari, A., Gelman, A., & Gabry, J. (2016). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing.